Heterogeneous Swarms: Jointly Optimizing Model Roles and Weights for Multi-LLM Systems

Feng, Shangbin, Wang, Zifeng, Goyal, Palash, Wang, Yike, Shi, Weijia, Xia, Huang, Palangi, Hamid, Zettlemoyer, Luke, Tsvetkov, Yulia, Lee, Chen-Yu, Pfister, Tomas

arXiv.org Artificial Intelligence

We propose Heterogeneous Swarms, an algorithm to design multi-LLM systems by jointly optimizing model roles and weights. We represent multi-LLM systems as directed acyclic graphs (DAGs) of LLMs with topological message passing for collaborative generation. Given a pool of LLM experts and a utility function, Heterogeneous Swarms employs two iterative steps: role-step and weight-step. For role-step, we interpret model roles as learning a DAG that specifies the flow of inputs and outputs between LLMs. Starting from a swarm of random continuous adjacency matrices, we decode them into discrete DAGs, call the LLMs in topological order, evaluate on the utility function (e.g. accuracy on a task), and optimize the adjacency matrices with particle swarm optimization based on the utility score. For weight-step, we assess the contribution of individual LLMs in the multi-LLM systems and optimize model weights with swarm intelligence. We propose JFK-score to quantify the individual contribution of each LLM in the best-found DAG of the role-step, then optimize model weights with particle swarm optimization based on the JFK-score. Experiments demonstrate that Heterogeneous Swarms outperforms 15 role- and/or weight-based baselines by 18.5% on average across 12 tasks. Further analysis reveals that Heterogeneous Swarms discovers multi-LLM systems with heterogeneous model roles and substantial collaborative gains, and benefits from the diversity of language models.
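The role-step described above — decoding continuous adjacency matrices into DAGs, scoring them with a utility function, and updating the matrices with particle swarm optimization — can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the decoding rule, the PSO coefficients, and the toy `utility` function are all assumptions; in the actual system, the utility would run the LLM DAG on a task and return e.g. accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LLMS = 4   # size of the expert pool (illustrative)
SWARM = 8    # number of particles, i.e. candidate systems

def decode_dag(weights, threshold=0.5):
    """Decode a continuous adjacency matrix into a discrete DAG.

    Hypothetical decoding rule: pick a node order heuristically, then
    keep only above-threshold edges that point 'forward' in that order,
    which guarantees acyclicity. The paper's exact scheme may differ.
    """
    order = np.argsort(-weights.sum(axis=1))
    adj = np.zeros_like(weights, dtype=bool)
    for i_pos, i in enumerate(order):
        for j in order[i_pos + 1:]:     # forward edges only => acyclic
            adj[i, j] = weights[i, j] > threshold
    return adj, order

def utility(adj):
    """Placeholder utility: reward moderate connectivity. In practice,
    call the LLMs in topological order and evaluate task accuracy."""
    return -abs(int(adj.sum()) - N_LLMS)

# Standard particle swarm optimization over the continuous matrices.
pos = rng.random((SWARM, N_LLMS, N_LLMS))
vel = np.zeros_like(pos)
pbest, pbest_u = pos.copy(), np.full(SWARM, -np.inf)
gbest, gbest_u = pos[0].copy(), -np.inf

for step in range(20):
    for k in range(SWARM):
        adj, _ = decode_dag(pos[k])
        u = utility(adj)
        if u > pbest_u[k]:
            pbest_u[k], pbest[k] = u, pos[k].copy()
        if u > gbest_u:
            gbest_u, gbest = u, pos[k].copy()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)

best_adj, best_order = decode_dag(gbest)
```

The weight-step would follow the same PSO loop, but with particles over per-model weights and the JFK-score of each LLM in `best_adj` as the optimization signal.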


GraphInsight: Unlocking Insights in Large Language Models for Graph Structure Understanding

Cao, Yukun, Han, Shuo, Gao, Zengyi, Ding, Zezhong, Xie, Xike, Zhou, S. Kevin

arXiv.org Artificial Intelligence

Although Large Language Models (LLMs) have demonstrated potential in processing graphs, they struggle to comprehend graph structure information through prompts of graph description sequences, especially as the graph size increases. We attribute this challenge to the uneven memory performance of LLMs across different positions in graph description sequences, known as "positional biases". To address this, we propose GraphInsight, a novel framework aimed at improving LLMs' comprehension of both macro- and micro-level graphical information. GraphInsight is grounded in two key strategies: 1) placing critical graphical information in positions where LLMs exhibit stronger memory performance, and 2) consulting a lightweight external knowledge base for regions with weaker memory performance, inspired by retrieval-augmented generation (RAG). Moreover, GraphInsight explores integrating these two strategies into LLM agent processes for composite graph tasks that require multi-step reasoning. Extensive empirical studies on benchmarks with a wide range of evaluation tasks show that GraphInsight significantly outperforms all other graph description methods (e.g., prompting techniques and reordering strategies) in understanding graph structures of varying sizes.

Leveraging LLMs to tackle applications involving graphs has emerged as a burgeoning field of research, as graphs represent fundamental structures that capture intricate relationships and interactions in the real world Wang et al. (2021); Xu (2021). For example, Fatemi et al. have explored the potential of LLMs by converting various types of graphs, such as knowledge graphs Baek et al. (2023); Pan et al. (2024) and social network graphs Santra (2024); Babic (2023), into natural language descriptions, thereby enabling LLMs to perform question-answering tasks related to these graphs.
A key observation is that enhancing LLM performance in graph-related applications depends critically on LLMs' ability to comprehend graph structures through natural language descriptions. Existing studies Shang & Huang (2024); Li et al. (2023) primarily utilize two direct methods to transform graphs into text inputs for LLMs: structural formats, such as adjacency matrices (termed AM) or adjacency lists (termed AL), and sequential formats, such as edge-by-edge descriptions. However, extensive empirical studies Yuan et al. (2024) have shown that LLMs face significant challenges in understanding and reasoning about graph structures using current graph transformation methods, especially as graph size increases, leading to a "comprehension collapse". As shown in Figure 1 (a), several common LLMs perform poorly on graph structure understanding tasks (see benchmarks in Section 5.1), and their comprehension declines sharply as the graph size increases, ultimately leading to complete failure.
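GraphInsight's first strategy — placing critical graph information where LLM recall is strongest — can be illustrated with a small sketch that reorders an edge-by-edge description so important edges land at the head and tail of the sequence rather than the middle. This is a hypothetical illustration of the idea, not the paper's actual placement algorithm; the `importance` scores and the head/tail split are assumptions.

```python
def describe_graph(edges, importance):
    """Order an edge-by-edge graph description so the most important
    edges sit at the beginning and end of the prompt, where positional
    biases suggest LLM memory performance is stronger.

    `importance` maps each edge to an assumed salience score.
    """
    ranked = sorted(edges, key=lambda e: importance[e], reverse=True)
    head, tail, middle = [], [], []
    for i, e in enumerate(ranked):
        # Alternate the strongest edges between head and tail;
        # relegate the rest to the middle of the sequence.
        if i < len(ranked) // 3:
            (head if i % 2 == 0 else tail).append(e)
        else:
            middle.append(e)
    ordered = head + middle + tail[::-1]
    return ", ".join(f"{u} -> {v}" for u, v in ordered)

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
importance = {("A", "B"): 3, ("B", "C"): 1, ("C", "D"): 2, ("A", "D"): 5}
print(describe_graph(edges, importance))
```

The second strategy (RAG over weak-memory regions) would instead index the middle portion of the description in an external store and retrieve from it on demand.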